Fast Exact Bayesian Inference for Sparse Signals in the Normal Sequence Model
We consider exact algorithms for Bayesian inference with model selection
priors (including spike-and-slab priors) in the sparse normal sequence model.
Because the best existing exact algorithm becomes numerically unstable for
sample sizes over n=500, much attention has shifted to alternative
approaches like approximate algorithms (Gibbs sampling, variational Bayes,
etc.), shrinkage priors (e.g. the Horseshoe prior and the Spike-and-Slab LASSO)
or empirical Bayesian methods. However, by introducing algorithmic ideas from
online sequential prediction, we show that exact calculations are feasible for
much larger sample sizes: for general model selection priors we reach n=25000,
and for certain spike-and-slab priors we can easily reach n=100000. We further
prove a de Finetti-like result for finite sample sizes that characterizes
exactly which model selection priors can be expressed as spike-and-slab priors.
The computational speed and numerical accuracy of the proposed methods are
demonstrated in experiments on simulated data, on a differential gene
expression data set, and in a comparison of the effects of multiple
hyper-parameter settings in the beta-binomial prior. In our experimental
evaluation we compute
guaranteed bounds on the numerical accuracy of all new algorithms, which shows
that the proposed methods are numerically reliable whereas an alternative based
on long division is not.
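For intuition, in the simplest fully independent case (each coordinate drawn from a spike-and-slab prior with a Gaussian slab, observed under unit-variance Gaussian noise), the exact posterior inclusion probabilities have a closed form. A minimal sketch with illustrative names and hyper-parameter values; note this trivially linear-time case is not where the paper's algorithmic contribution lies, which concerns priors that couple the coordinates:

```python
import math

def normal_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def inclusion_probs(x, alpha=0.1, tau2=4.0):
    """Posterior P(theta_i != 0 | x_i) under an independent
    spike-and-slab prior: theta_i = 0 w.p. 1 - alpha, else N(0, tau2).
    Observations are x_i = theta_i + N(0, 1) noise, so the marginal of
    x_i is N(0, 1 + tau2) under the slab and N(0, 1) under the spike."""
    probs = []
    for xi in x:
        slab = alpha * normal_pdf(xi, 1.0 + tau2)    # theta_i != 0
        spike = (1 - alpha) * normal_pdf(xi, 1.0)    # theta_i == 0
        probs.append(slab / (slab + spike))
    return probs
```

Large observations receive inclusion probability near 1, observations near zero receive probability near the prior spike mass.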
Rényi Divergence and Kullback-Leibler Divergence
Rényi divergence is related to Rényi entropy much like Kullback-Leibler
divergence is related to Shannon's entropy, and comes up in many settings. It
was introduced by Rényi as a measure of information that satisfies almost the
same axioms as Kullback-Leibler divergence, and depends on a parameter that is
called its order. In particular, the Rényi divergence of order 1 equals the
Kullback-Leibler divergence.
We review and extend the most important properties of Rényi divergence and
Kullback-Leibler divergence, including convexity, continuity, limits of
σ-algebras, and the relation of the special order 0 to the Gaussian
dichotomy and contiguity. We also show how to generalize the Pythagorean
inequality to orders different from 1, and we extend the known equivalence
between channel capacity and minimax redundancy to continuous channel inputs
(for all orders) and present several other minimax results.
Comment: To appear in IEEE Transactions on Information Theory
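For discrete distributions with full support, the order-α divergence is D_α(P‖Q) = (α−1)⁻¹ log Σᵢ pᵢ^α qᵢ^{1−α}, and the Kullback-Leibler divergence is recovered in the limit α → 1. A minimal sketch (function names are illustrative):

```python
import math

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1) between
    discrete distributions p and q with full support, in nats."""
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1)

def kl_divergence(p, q):
    """Kullback-Leibler divergence (the order-1 limit), in nats.
    Assumes q has full support wherever p does."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

As a sanity check, the order-0.999 divergence is numerically close to the KL divergence, and the divergence is nondecreasing in its order.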
Second-order Quantile Methods for Experts and Combinatorial Games
We aim to design strategies for sequential decision making that adjust to the
difficulty of the learning problem. We study this question both in the setting
of prediction with expert advice, and for more general combinatorial decision
tasks. We are not satisfied with just guaranteeing minimax regret rates, but we
want our algorithms to perform significantly better on easy data. Two popular
ways to formalize such adaptivity are second-order regret bounds and quantile
bounds. The underlying notions of 'easy data', which may be paraphrased as "the
learning problem has small variance" and "multiple decisions are useful", are
synergetic. But even though there are sophisticated algorithms that exploit one
of the two, no existing algorithm is able to adapt to both.
In this paper we outline a new method for obtaining such adaptive algorithms,
based on a potential function that aggregates a range of learning rates (which
are essential tuning parameters). By choosing the right prior we construct
efficient algorithms and show that they reap both benefits by proving the first
bounds that are both second-order and incorporate quantiles.
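A potential that aggregates a range of learning rates can be sketched as follows. This is a minimal illustration with a uniform prior over a finite grid of rates; the exponential-of-regret-minus-variance form follows second-order potentials of the Squint type, but the names and details here are assumptions, not the paper's exact algorithm:

```python
import math

def squint_weights(R, V, etas):
    """One step of a Squint-style aggregation sketch: expert i's weight
    is proportional to the average, over a grid of learning rates eta,
    of eta * exp(eta * R_i - eta**2 * V_i), where R_i is the cumulative
    instantaneous regret w.r.t. expert i and V_i the cumulative squared
    instantaneous regret. A uniform prior over experts and over the
    eta grid is assumed."""
    raw = [sum(eta * math.exp(eta * r - eta * eta * v) for eta in etas) / len(etas)
           for r, v in zip(R, V)]
    total = sum(raw)
    return [w / total for w in raw]
```

With equal statistics the weights stay uniform; an expert with larger cumulative regret against it receives more weight, and the grid of rates removes the need to commit to a single eta in advance.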
A Second-order Bound with Excess Losses
We study online aggregation of the predictions of experts, and first show new
second-order regret bounds in the standard setting, which are obtained via a
version of the Prod algorithm (and also a version of the polynomially weighted
average algorithm) with multiple learning rates. These bounds are in terms of
excess losses, the differences between the instantaneous losses suffered by the
algorithm and those of a given expert. We then demonstrate the usefulness of
these bounds in the context of experts that report their confidences as a
number in the interval [0,1] using a generic reduction to the standard setting.
We conclude with two other applications in the standard setting, which improve
the known bounds in the case of small excess losses and show a bounded regret
against i.i.d. sequences of losses.
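A Prod-style multiplicative update driven by excess losses, with one learning rate per expert, can be sketched as follows. This is a simplified illustration under the assumptions stated in the comments, not the paper's exact method:

```python
class ProdExcess:
    """Sketch of a Prod-style update in the excess losses, with one
    learning rate per expert. Losses are assumed to lie in [0, 1] and
    each eta <= 1/2, so every multiplicative factor stays positive.
    The mixing rule (weights proportional to eta_i * w_i) and the
    fixed learning rates are illustrative simplifications."""

    def __init__(self, etas):
        self.etas = etas
        self.w = [1.0 / len(etas)] * len(etas)

    def predict_weights(self):
        # prediction mixes experts with weights proportional to eta_i * w_i
        raw = [e * w for e, w in zip(self.etas, self.w)]
        total = sum(raw)
        return [r / total for r in raw]

    def update(self, losses):
        p = self.predict_weights()
        alg_loss = sum(pi * li for pi, li in zip(p, losses))
        # multiplicative update in the excess loss (alg_loss - losses[i]):
        # experts that beat the algorithm gain weight, others lose it
        self.w = [w * (1.0 + e * (alg_loss - li))
                  for w, e, li in zip(self.w, self.etas, losses)]
```

Running this against one expert that always suffers loss 0 and one that always suffers loss 1 concentrates the prediction weights on the better expert.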
Lipschitz Adaptivity with Multiple Learning Rates in Online Learning
We aim to design adaptive online learning algorithms that take advantage of
any special structure that might be present in the learning task at hand, with
as little manual tuning by the user as possible. A fundamental obstacle that
comes up in the design of such adaptive algorithms is to calibrate a so-called
step-size or learning rate hyperparameter depending on variance, gradient
norms, etc. A recent technique promises to overcome this difficulty by
maintaining multiple learning rates in parallel. This technique has been
applied in the MetaGrad algorithm for online convex optimization and the Squint
algorithm for prediction with expert advice. However, in both cases the user
still has to provide in advance a Lipschitz hyperparameter that bounds the norm
of the gradients. Although this hyperparameter is typically not available in
advance, tuning it correctly is crucial: if it is set too small, the methods
may fail completely; but if it is taken too large, performance deteriorates
significantly. In the present work we remove this Lipschitz hyperparameter by
designing new versions of MetaGrad and Squint that adapt to its optimal value
automatically. We achieve this by dynamically updating the set of active
learning rates. For MetaGrad, we further improve the computational efficiency
of handling constraints on the domain of prediction, and we remove the need to
specify the number of rounds in advance.
Comment: 22 pages. To appear in COLT 2019
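The idea of dropping learning rates that have become too large for the gradients seen so far can be illustrated with a toy helper: keep a geometric grid of candidate rates capped by the inverse of the running maximum gradient norm. The names, the cap constant, and the grid size are all illustrative assumptions, not the paper's construction:

```python
def active_learning_rates(max_grad_norm, cap=1.0, base=2.0, grid_size=10):
    """Illustrative sketch: a geometric grid of candidate learning
    rates, capped at cap / B where B is the largest gradient norm
    observed so far (a running Lipschitz estimate). As B grows, rates
    that are now too large fall out of the grid, so no Lipschitz bound
    needs to be supplied in advance."""
    top = cap / max_grad_norm
    return [top / base ** k for k in range(grid_size)]
```

Observing a larger gradient norm shrinks every candidate rate in the grid, mirroring how the adaptive methods deactivate rates that have become unsafe.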
Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning
We consider online learning algorithms that guarantee worst-case regret rates
in adversarial environments (so they can be deployed safely and will perform
robustly), yet adapt optimally to favorable stochastic environments (so they
will perform well in a variety of settings of practical importance). We
quantify the friendliness of stochastic environments by means of the well-known
Bernstein (a.k.a. generalized Tsybakov margin) condition. For two recent
algorithms (Squint for the Hedge setting and MetaGrad for online convex
optimization) we show that the particular form of their data-dependent
individual-sequence regret guarantees implies that they adapt automatically to
the Bernstein parameters of the stochastic environment. We prove that these
algorithms attain fast rates in their respective settings both in expectation
and with high probability.
Adaptive Hedge
Most methods for decision-theoretic online learning are based on the Hedge
algorithm, which takes a parameter called the learning rate. In most previous
analyses the learning rate was carefully tuned to obtain optimal worst-case
performance, leading to suboptimal performance on easy instances, for example
when there exists an action that is significantly better than all others. We
propose a new way of setting the learning rate, which adapts to the difficulty
of the learning problem: in the worst case our procedure still guarantees
optimal performance, but on easy instances it achieves much smaller regret. In
particular, our adaptive method achieves constant regret in a probabilistic
setting, when there exists an action that on average obtains strictly smaller
loss than all other actions. We also provide a simulation study comparing our
approach to existing methods.
Comment: This is the full version of the paper with the same name that will
appear in Advances in Neural Information Processing Systems 24 (NIPS 2011),
2012. The two papers are identical, except that this version contains an
extra section of Additional Material.
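Plain Hedge with a fixed learning rate η puts weight proportional to exp(−η Lᵢ) on the action with cumulative loss Lᵢ; adaptive variants like the one described above replace the fixed η with one tuned online from the observed losses. A minimal sketch of the fixed-η baseline:

```python
import math

def hedge_weights(cum_losses, eta):
    """Hedge: action weights proportional to exp(-eta * cumulative loss).
    Adaptive variants tune eta online from the losses seen so far; this
    sketch keeps eta fixed for illustration."""
    m = min(cum_losses)  # subtract the minimum for numerical stability
    raw = [math.exp(-eta * (L - m)) for L in cum_losses]
    total = sum(raw)
    return [r / total for r in raw]
```

An action that is far ahead in cumulative loss absorbs almost all the weight, while ties give uniform weights, which is the easy-instance behavior the adaptive tuning is designed to exploit.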